
    A study on creating games and virtual worlds from a software engineering perspective

    Get PDF
    The industry of developing games and virtual environments has come a long way, but it is not without its share of problems. In many ways, virtual worlds entail similar development processes to games, as both require expertise from creative and technical facets. While facing similar difficulties to other commercial projects, creating games presents its own unique challenges due to its multi-disciplinary nature. Because of this, beginner and indie developers face complications in attempting to apply Software Engineering methods (which are efficiently tailored to guide general software projects) to reduce the chance of project failure. This study aimed to examine the construction of games and integrate it into the Software Process. First, the backgrounds of game development and Software Engineering were explored, deriving introductory guides for both. Then, a guidelines draft was developed to offer advice on grafting aspects of the Software Process onto existing game or virtual world projects. The guidelines are aimed at independent developers, as they represent the future workforce of the game development industry. To test the draft, an experimental project was carried out using the guide as a supplementary resource. The project resulted in a 3D action-RPG game platform that can be used as a starting point for advanced feature placement and content development. It demonstrated that the guidelines were educational but lacked depth and correctness. Revisions to these guidelines will transform them into a stepping-stone resource that introduces novice game developers to integrating Software Engineering processes and practices into their projects. (Abstract by author)

    An Improved Indoor Robot Human-Following Navigation Model Using Depth Camera, Active IR Marker and Proximity Sensors Fusion

    No full text
    Creating a navigation system for autonomous companion robots has always been a difficult process: the system must contend with a dynamically changing environment populated by a myriad of obstructions and an unspecified number of people other than the intended person to follow. This study documents the implementation of an indoor autonomous robot navigation model, based on multi-sensor fusion, using Microsoft Robotics Developer Studio 4 (MRDS). The model relies on a depth camera, a limited array of proximity sensors and an active IR marker tracking system. This allows the robot to lock onto the correct target for human-following, while approximating the best starting direction to begin maneuvering around obstacles for minimum required motion. The system is implemented according to a navigation algorithm that transforms the data from all three types of sensors into tendency arrays and fuses them to determine whether to take a leftward or rightward route around an encountered obstacle. The decision process considers visible short, medium and long-range obstructions and the current position of the target person. The system is implemented using MRDS and its functional test performance is presented over a series of Virtual Simulation Environment scenarios, greenlighting further extensive benchmark simulations.
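
    The following is a minimal sketch of how the described tendency-array fusion might look, assuming each sensor's readings are first discretised into a fixed number of angular bins; the bin count, weights, and sample values are illustrative assumptions, not the model actually implemented in MRDS.

        import numpy as np

        N_BINS = 9  # hypothetical angular bins spanning far left (index 0) to far right (index 8)

        def to_tendency(readings, weight):
            """Convert one sensor's per-bin occupancy readings into a weighted tendency array.
            Higher values mean more free space (or target attraction) in that direction."""
            readings = np.asarray(readings, dtype=float)
            openness = 1.0 - np.clip(readings, 0.0, 1.0)  # invert occupancy into openness
            return weight * openness

        def fuse_and_decide(depth_bins, proximity_bins, ir_marker_bins):
            """Fuse tendency arrays from the three sensor types and pick a detour side."""
            fused = (to_tendency(depth_bins, weight=0.5)        # depth camera: medium/long range
                     + to_tendency(proximity_bins, weight=0.3)  # proximity array: short range
                     + to_tendency(ir_marker_bins, weight=0.2)) # IR marker: target position bias
            left_score = fused[: N_BINS // 2].sum()
            right_score = fused[N_BINS // 2 + 1:].sum()
            return "left" if left_score > right_score else "right"

        # Example: an obstacle ahead-left, target slightly to the right -> detour rightward
        depth     = [0.2, 0.6, 0.9, 0.9, 0.8, 0.3, 0.1, 0.0, 0.0]
        proximity = [0.0, 0.0, 0.7, 0.8, 0.5, 0.0, 0.0, 0.0, 0.0]
        ir_marker = [1.0, 1.0, 1.0, 1.0, 0.8, 0.4, 0.1, 0.0, 0.0]  # low where the target likely is
        print(fuse_and_decide(depth, proximity, ir_marker))  # -> "right"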

    Exploring the possibility of companion robots for injury prevention for people with disabilities

    No full text
    Abstract not available.

    A human orientation tracking system using Template Matching and active Infrared marker

    No full text
    Human tracking research has been reinforced by the introduction of vision-based motion tracking sensors such as Microsoft Kinect and Intel RealSense since 2009. No longer limited to wearable sensors, embedded environments and 2D camera feeds, stereoscopic digital cameras and Infrared imaging have enabled perceivable depth and distance to contribute to geo-location, activity tracking, and automated detection of abnormal events. However, current motion tracking systems have a limited zone of detection and tracking, subject to environmental lighting, lens occlusion and hardware factors. This study proposes and demonstrates the possibility of complementing these existing sensors with a simple human tracking system that can be used to direct their reorientation in order to maintain the tracked person within their optimal zone of detection. The prototype in this study utilizes an Infrared Camera and an active Infrared marker to track the pan orientation of a person by means of Template Matching - finding a template image match that indicates the angle of marker pan. By knowing the target's angle of pan, a Microsoft Kinect sensor can be autonomously relocated to the front of the person via the shortest path around. This implementation may also be retooled for other systems that have a smaller zone of detection.
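
    A minimal sketch of the template-matching step using OpenCV is given below, assuming a pre-captured library of IR marker templates, one per known pan angle; the file names, the 15-degree angle step, and the shortest-rotation helper are illustrative assumptions rather than the prototype's actual implementation.

        import cv2  # OpenCV, assumed available

        # Hypothetical template library: one IR-marker image per known pan angle (degrees)
        TEMPLATE_FILES = {angle: f"marker_{angle:03d}.png" for angle in range(0, 360, 15)}

        def estimate_pan_angle(ir_frame_gray, templates):
            """Return the pan angle whose marker template best matches the IR frame."""
            best_angle, best_score = None, -1.0
            for angle, template in templates.items():
                result = cv2.matchTemplate(ir_frame_gray, template, cv2.TM_CCOEFF_NORMED)
                _, score, _, _ = cv2.minMaxLoc(result)
                if score > best_score:
                    best_angle, best_score = angle, score
            return best_angle, best_score

        def shortest_rotation(current_deg, target_deg):
            """Signed shortest rotation (degrees) for moving the sensor around to face the target."""
            return (target_deg - current_deg + 180) % 360 - 180

        if __name__ == "__main__":
            # Assumes the template and frame image files exist on disk
            templates = {a: cv2.imread(f, cv2.IMREAD_GRAYSCALE) for a, f in TEMPLATE_FILES.items()}
            frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder IR capture
            angle, score = estimate_pan_angle(frame, templates)
            print(f"Estimated pan: {angle} deg (match score {score:.2f}), "
                  f"rotate {shortest_rotation(0, angle):+d} deg")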

    A robotic telepresence system for full-time monitoring of children with cognitive disabilities

    No full text
    Children with cognitive disabilities, including Autism Spectrum Disorder and Cerebral Palsy, are exposed to greater possibilities of injuries due to their physical limitations and the consequences of disruptive behavior. Their parents and caregivers, who act as guardians, are compelled to monitor them full-time in order to prevent such injuries and potential damage to the surroundings from occurring. At the same time, assistive robotics has recently made strides in aiding rehabilitation of both physical and social impairments for children with cognitive disabilities. Companion robot prototypes have been developed and tested with marginal success for maintaining repetitive reinforcement exercises for these children in between treatment sessions. Due to the special affinity of these children towards robots, the final goal of this research effort is to examine the effectiveness of implementing and applying a robot companion prototype for the purpose of full-time activity monitoring. The robot will follow the child and log his or her activities in real time. It shall alert the guardians whenever visibility is lost or a dangerous activity is detected. The guardian can choose to remotely connect to the robot via telepresence to assess the situation and verbally intervene, even when physically away from the child. This paper proposes the conceptual development of the said robot prototype, for the purpose of studying its reliability in preventing injuries and its effect on allaying parenting stress.
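
    A minimal sketch of the monitoring-and-alert behaviour described above is shown here as a simple sensing loop; the observation fields, the set of "dangerous" activity labels, and the notify_guardian hook are hypothetical placeholders, not the prototype's actual interfaces.

        import time
        from dataclasses import dataclass

        @dataclass
        class Observation:
            child_visible: bool   # is the child currently within the robot's field of view?
            activity_label: str   # e.g. "playing", "climbing" (hypothetical activity labels)

        DANGEROUS_ACTIVITIES = {"climbing", "near_stairs"}  # illustrative set, not from the paper

        def notify_guardian(message):
            """Placeholder alert hook; a real system might push to a phone app or telepresence client."""
            print(f"[ALERT] {message}")

        def monitoring_loop(sense, log, poll_seconds=1.0, max_ticks=None):
            """Follow-and-monitor loop: log activity, alert on lost visibility or dangerous activity."""
            ticks = 0
            while max_ticks is None or ticks < max_ticks:
                obs = sense()   # one perception snapshot from the robot's sensors
                log(obs)        # real-time activity log entry
                if not obs.child_visible:
                    notify_guardian("Visibility of the child has been lost.")
                elif obs.activity_label in DANGEROUS_ACTIVITIES:
                    notify_guardian(f"Dangerous activity detected: {obs.activity_label}")
                ticks += 1
                time.sleep(poll_seconds)

        # Example run with canned observations standing in for real sensing
        samples = iter([Observation(True, "playing"),
                        Observation(True, "climbing"),
                        Observation(False, "unknown")])
        monitoring_loop(lambda: next(samples), print, poll_seconds=0.0, max_ticks=3)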

    Robotics for assisting children with physical and cognitive disabilities

    No full text
    This chapter summarizes the findings of a study on robotics research and application for assisting children with disabilities between the years 2009 and 2013. The said disabilities include impairments of motor skills, locomotion, and social interaction that are commonly attributed to children suffering from Autistic Spectrum Disorders (ASD) and Cerebral Palsy (CP). Whereas assistive technologies for disabilities largely address the restoration of physical capabilities, disabled children also require dedicated rehabilitation for social interaction and mental health. As such, the breadth of this study covers existing efforts in rehabilitation of both physical and socio-psychological domains, which involve Human-Robot Interaction. Overviewed topics include assisted locomotion training, passive stretching and active movement rehabilitation, upper-extremity motor function, social interactivity, therapist-mediators, active play encouragement, as well as several life-long assistive robotics in current use. This chapter concludes by drawing attention to ethical and adoption issues that may obstruct the field's effectiveness.